A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes
Spatially-coupled LDPC codes are known to have excellent asymptotic
properties. Much less is known regarding their finite-length performance. We
propose a scaling law to predict the error probability of finite-length
spatially-coupled ensembles when transmission takes place over the binary
erasure channel. We discuss how the parameters of the scaling law are connected
to fundamental quantities appearing in the asymptotic analysis of these
ensembles and we verify that the predictions of the scaling law fit well to the
data derived from simulations over a wide range of parameters. The ultimate
goal of this line of research is to develop analytic tools for the design of
spatially-coupled LDPC codes under practical constraints.
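Finite-length scaling laws of this kind typically express the block-error probability as a Q-function of a rescaled gap to the BEC threshold. A minimal sketch, assuming the generic form P_B ≈ Q(√n(ε* − βn^(−2/3) − ε)/α); the threshold ε* and the constants α and β below are hypothetical placeholders, not values fitted in the paper:

```python
from math import erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def predicted_block_error(n, eps, eps_star, alpha, beta):
    """Scaling-law prediction of block-error probability over a BEC(eps).

    Uses the generic finite-length form
        P_B ~ Q(sqrt(n) * (eps_star - beta * n**(-2/3) - eps) / alpha),
    where eps_star (threshold), alpha, and beta are ensemble-dependent
    constants that must be derived or fitted for the ensemble at hand.
    """
    shift = eps_star - beta * n ** (-2.0 / 3.0) - eps
    return q_function(sqrt(n) * shift / alpha)

# Hypothetical parameter values, purely for illustration:
print(predicted_block_error(n=2000, eps=0.42, eps_star=0.4881, alpha=0.56, beta=0.61))
```

Under this form, increasing the blocklength at a fixed erasure rate below threshold drives the predicted error probability to zero, matching the asymptotic behavior.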
On Distributed Storage Allocations for Memory-Limited Systems
In this paper we consider distributed storage allocation problems under memory
constraints. Firstly, we propose a tractable relaxation to the problem of
optimal symmetric allocations from [1]. The approximated problem is based on
the Q-error function, and its solution approaches the solution of the initial
problem, as the number of storage nodes in the network grows. Secondly,
exploiting this relaxation, we are able to formulate and solve the problem of
storage allocations for memory-limited DSS and arbitrary memory profiles.
Finally, we discuss the extension to the case of multiple data objects stored
in the DSS.
Comment: Submitted to IEEE GLOBECOM'1
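The abstract does not spell out the relaxation, but a standard way a Q-function enters such problems is as a Gaussian approximation to the binomial probability that enough of the accessed nodes are available. A hedged illustration under that assumption (the node count m, access probability p, and recovery threshold below are made-up parameters, not from the paper):

```python
from math import comb, erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def recovery_prob_exact(m, p, threshold):
    """P(Binomial(m, p) >= threshold): each of m storage nodes is
    independently accessible with probability p, and recovery of the
    object requires at least `threshold` of them."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(threshold, m + 1))

def recovery_prob_gaussian(m, p, threshold):
    """Q-function approximation to the same binomial tail, which becomes
    accurate as the number of storage nodes grows."""
    mu, sigma = m * p, sqrt(m * p * (1 - p))
    return q_function((threshold - 0.5 - mu) / sigma)  # continuity correction

m, p, threshold = 40, 0.6, 22  # illustrative values only
print(recovery_prob_exact(m, p, threshold), recovery_prob_gaussian(m, p, threshold))
```

The smooth Q-function surrogate is differentiable in the allocation parameters, which is what makes the relaxed optimization tractable.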
Tree-structure Expectation Propagation for Decoding LDPC codes over Binary Erasure Channels
Expectation Propagation (EP) is a generalization of Belief Propagation (BP) in two
ways. First, it can be used with any exponential family distribution over the
cliques in the graph. Second, it can impose additional constraints on the
marginal distributions. We use this second property to impose pair-wise
marginal distribution constraints in some check nodes of the LDPC Tanner graph.
These additional constraints allow decoding the received codeword when the BP
decoder gets stuck. In this paper, we first present the new decoding algorithm,
whose complexity is identical to the BP decoder, and we then prove that it is
able to decode codewords with a larger fraction of erasures, as the block size
tends to infinity. The proposed algorithm can also be understood as a
simplification of the Maxwell decoder, but without its computational
complexity. We also illustrate that the new algorithm outperforms the BP
decoder for finite block-sizes.
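Over the BEC, the BP decoder that the algorithm builds on reduces to peeling: repeatedly solve any parity check with a single erased bit, and stop when no such check remains (the "stuck" situation the extra EP constraints are designed to resolve). A minimal sketch with a toy parity-check set:

```python
def peeling_decode(checks, received):
    """BP (peeling) decoder for the binary erasure channel.

    checks: list of parity checks, each a list of bit positions.
    received: list of 0/1 bits, with None marking erasures.
    Repeatedly finds a check with exactly one erased position and sets that
    bit so the check has even parity; stops when no such check exists.
    """
    word = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [j for j in check if word[j] is None]
            if len(erased) == 1:
                parity = sum(word[j] for j in check if word[j] is not None) % 2
                word[erased[0]] = parity  # restore even overall parity
                progress = True
    return word  # any remaining None means the decoder got stuck

# Toy example: (7,4) Hamming-style checks, two erasures peeled in sequence.
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
print(peeling_decode(checks, [1, 0, None, 1, 0, None, 1]))  # → [1, 0, 1, 1, 0, 0, 1]
```

When every remaining check covers two or more erasures (a stopping set), peeling halts with unresolved bits; that is precisely where the pair-wise marginal constraints of the proposed decoder add power.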
Boosting Handwriting Text Recognition in Small Databases with Transfer Learning
In this paper we deal with the offline handwriting text recognition (HTR)
problem with reduced training datasets. Recent HTR solutions based on
artificial neural networks achieve remarkable results on reference
databases. These deep neural networks are composed of both convolutional
(CNN) and long short-term memory (LSTM) recurrent units. In
addition, connectionist temporal classification (CTC) is the key to avoid
segmentation at character level, greatly facilitating the labeling task. One of
the main drawbacks of the CNN-LSTM-CTC (CLC) solutions is that they need a
considerable part of the text to be transcribed for every type of calligraphy,
typically on the order of a few thousand lines. Furthermore, in some
scenarios the text to transcribe is not that long, e.g. in the Washington
database. The CLC typically overfits for this reduced number of training
samples. Our proposal is based on the transfer learning (TL) from the
parameters learned with a bigger database. We first investigate, for a reduced
and fixed number of training samples, 350 lines, how the learning from a large
database, the IAM, can be transferred to the learning of the CLC of a reduced
database, Washington. We focus on which layers of the network need not be
re-trained. We conclude that the best solution is to re-train the whole CLC
parameters initialized to the values obtained after the training of the CLC
from the larger database. We also investigate results when the training size is
further reduced. The differences in the CER are most remarkable when training
with just 350 lines: a CER of 3.3% is achieved with TL, while we obtain a CER
of 18.2% when training from scratch. As a byproduct, the learning times are
considerably reduced. Similarly good results are obtained from the Parzival
database when trained with this reduced number of lines and this new approach.
Comment: ICFHR 2018 Conference
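The CER figures quoted above are edit-distance based. A minimal sketch of how CER is conventionally computed (Levenshtein distance normalized by the reference length; the exact normalization used in the paper is an assumption):

```python
def levenshtein(ref, hyp):
    """Edit distance (insertions, deletions, substitutions) between sequences."""
    dp = list(range(len(hyp) + 1))          # distances for an empty reference
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete r
                        dp[j - 1] + 1,      # insert h
                        prev + (r != h))    # substitute (free if equal)
            prev = cur
    return dp[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

print(cer("handwriting", "handwritten"))
```

Averaged over a test set, this is the metric in which TL at 3.3% clearly beats the 18.2% obtained when training from scratch on 350 lines.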
Turbo EP-based Equalization: a Filter-Type Implementation
This manuscript has been submitted to Transactions on Communications on
September 7, 2017; revised on January 10, 2018 and March 27, 2018; and accepted
on April 25, 2018
We propose a novel filter-type equalizer to improve the solution of the
linear minimum-mean squared-error (LMMSE) turbo equalizer, with computational
complexity constrained to be quadratic in the filter length. When high-order
modulations and/or large-memory channels are used, the optimal BCJR equalizer
is unavailable due to its computational complexity. In this scenario, the
filter-type LMMSE turbo equalizer exhibits good performance compared to
other approximations. In this paper, we show that this solution can be
significantly improved by using expectation propagation (EP) in the estimation
of the a posteriori probabilities. First, it yields a more accurate estimation
of the extrinsic distribution to be sent to the channel decoder. Second,
compared to other EP-based solutions, the computational complexity of the
proposed solution is constrained to be quadratic in the length of the finite
impulse response (FIR). In addition, we review previous EP-based turbo
equalization implementations. Instead of assuming default uniform priors, we
exploit the outputs of the decoder. Some simulation results are included to
show that this new EP-based filter remarkably outperforms the turbo approach of
previous versions of the EP algorithm and also improves the LMMSE solution,
with and without turbo equalization.
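The baseline LMMSE filter that the EP scheme improves on can be sketched for a known FIR channel: build the channel convolution matrix over a finite observation window and solve the Wiener equations. The two-tap channel and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def lmmse_equalizer(h, filter_len, delay, noise_var, signal_var=1.0):
    """LMMSE FIR equalizer taps for a known channel impulse response h.

    Builds the channel convolution matrix H over a window of filter_len
    observations and solves
        w = (sx^2 * H H^T + sn^2 * I)^{-1} * sx^2 * H e_delay,
    the Wiener solution for estimating the symbol at position `delay`.
    """
    L = len(h)
    H = np.zeros((filter_len, filter_len + L - 1))
    for i in range(filter_len):
        H[i, i:i + L] = h          # row i: channel taps hitting observation i
    R = signal_var * (H @ H.T) + noise_var * np.eye(filter_len)
    p = signal_var * H[:, delay]
    return np.linalg.solve(R, p), H

# Illustrative two-tap channel with mild ISI at 20 dB SNR.
w, H = lmmse_equalizer(np.array([1.0, 0.4]), filter_len=8, delay=4, noise_var=0.01)
c = w @ H                          # combined channel + equalizer response
print(np.round(c, 3))              # should peak near 1 at index 4, small elsewhere
```

In the turbo/EP setting, the symbol means and variances fed into this linear solve are refined iteration by iteration from the decoder output rather than fixed at the uniform prior, which is the source of the reported gains.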